Every piece of software is communication. Between people, between machines, and between the two.

That’s not a new idea. But lately I’ve been paying more attention to who is actually talking to whom, and it doesn’t feel as simple as it used to.

From Quadrants to a Triangle

I used to think about this pretty cleanly: two actors — humans and machines — each sending and receiving. That gave four combinations:
 

             → Human                                → Machine
Human →      Brainstorming, docs, code reviews, ... Programming languages, UIs, SQL, ...
Machine →    Dashboards, logs, error messages, ...  APIs, protocols, message queues, ...

That model held up for a long time.

But after spending a lot of time with AI coding assistants, it started to feel incomplete. AI doesn’t quite fit into either category. It behaves like both, depending on the situation, and sometimes like neither.

Thinking about it as a triangle has been more useful: humans, traditional machines, and AI systems. Each pair communicates differently.

Human ↔ Human: Still the Base Layer

Most of this hasn’t changed. We still pretend to write docs, argue in code reviews, and try to agree on what “intuitive” means.

What has changed for me is how I think about code as communication between humans.

Watching an AI try to understand a codebase feels a lot like watching a new teammate onboard. The parts that are clear — good naming, structure, explicit intent — work fine. The parts that rely on shared context or “obvious” assumptions fall apart.

That’s been useful. It exposes the same weak spots I’d expect a human to struggle with, just faster and more consistently.

Lately when I refactor, I sometimes ask: would someone (human or not) understand what this is doing without extra context? It’s a simple question, but it cuts through a lot of noise.

Human ↔ Machine: More Layers in Between

The basics are the same. We still use programming languages, interfaces, queries — all the usual translation layers.

What’s different is that there’s now often an extra step in the middle.

I describe what I want. The AI turns that into code. The machine runs it.

So the flow becomes: human → AI → machine.

That middle step is surprisingly flexible. I can describe behavior in loose terms and still get something usable back. Doing that directly in a programming language would take much more effort.

It works the other way too. When I hit an error or run into unfamiliar code, I often ask the AI to explain it. It sits between the machine output and my understanding.

It’s basically acting as a translator in both directions.
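That two-way flow can be sketched in a few lines. Note that `ask_model` here is a hypothetical stand-in for a real AI client call, stubbed with canned responses so the sketch is self-contained:

```python
# Sketch of the human → AI → machine flow described above.
# `ask_model` is a placeholder, not a real API; a real version
# would send the prompt to a model and return its reply.

def ask_model(prompt: str) -> str:
    """Stubbed model call with canned responses for illustration."""
    if "SQL" in prompt:
        return "SELECT name FROM users WHERE active = 1;"
    return "This error means the query referenced a missing column."

def describe_to_code(intent: str) -> str:
    # Forward direction: loose human description in, runnable code out.
    return ask_model(f"Write SQL for: {intent}")

def explain_output(machine_output: str) -> str:
    # Reverse direction: raw machine output in, human explanation out.
    return ask_model(f"Explain this error: {machine_output}")

query = describe_to_code("names of active users")
explanation = explain_output("column 'nmae' does not exist")
```

The same intermediary handles both directions: vague intent gets tightened into something a machine accepts, and terse machine output gets loosened into something a person can use.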

Human ↔ AI: Close to English, Not Quite

Talking to AI feels like using natural language, but it doesn’t behave exactly like a normal conversation.

Over time, I’ve noticed I phrase things differently. I add structure, examples, constraints — things I wouldn’t always spell out to another person. Certain patterns (“break this down step by step”, “assume X”) consistently help, even though they’d sound a bit unnatural in human conversation.

It still looks like English, but it’s not quite the same thing.
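Those patterns can be made concrete as a small prompt template. The field layout below (task, constraints, example, step request) is just one illustrative shape, not a canonical format:

```python
# One way to add the structure, examples, and constraints mentioned
# above to a request. The section names are illustrative, not standard.

def build_prompt(task: str, constraints: list[str], example: str) -> str:
    parts = [
        f"Task: {task}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Example of the desired output:\n{example}",
        "Break this down step by step before answering.",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    task="Summarize this log file",
    constraints=["Assume the input is UTF-8", "Keep it under 100 words"],
    example="3 errors, all caused by a missing config key.",
)
```

None of this would be necessary with a colleague; with a model, spelling out the constraints and the expected shape of the answer reliably helps.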

On the other side, AI responses have their own quirks. Everything comes back with the same level of confidence. There’s no tone to read into, no hesitation, no “this might be wrong.” You have to build your own sense of how much to trust it.

Machine ↔ AI: Strict One Way, Flexible the Other

When AI talks to machines, it uses the same interfaces we always have: APIs, SQL, structured data. That side is still strict — things either match the expected format or they don’t.

But in the other direction, AI is much more forgiving.

It can work with messy input — incomplete data, inconsistent formats, things that would normally just cause a failure. Instead of breaking, it tries to make sense of what’s there.

That tolerance changes how the interaction feels. It’s not as brittle as traditional machine interfaces, but it’s not quite human-level understanding either.
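The contrast can be shown with a toy example: a strict parser rejects anything off-format, while a tolerant one (crudely playing the AI's role here) tries to salvage messy input. The repair rules are invented for illustration:

```python
import json

def strict_parse(payload: str) -> dict:
    # Machine-style interface: the format matches exactly or it fails.
    return json.loads(payload)

def tolerant_parse(payload: str) -> dict:
    # Crude stand-in for AI-style tolerance: try the strict path first,
    # then attempt a couple of repairs (single quotes, trailing comma).
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        repaired = payload.replace("'", '"').replace(",}", "}")
        return json.loads(repaired)

messy = "{'name': 'Ada', 'active': true,}"
# strict_parse(messy) raises json.JSONDecodeError
data = tolerant_parse(messy)
```

A real AI system is far more flexible than two string replacements, of course; the point is only where the failure happens: the strict side fails at the format, the tolerant side tries to recover the intent.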

AI ↔ AI: Meeting in the Middle

When AI systems communicate with each other, they often do it through human language.

Which is a bit strange, if you think about it. These systems operate internally in ways that have nothing to do with language as we use it, but they fall back to it as a shared format.

Part of that is practical — different systems don’t share the same internal representations. But it also means that, in a way, they meet in a space that was originally designed for us.

What’s Actually Changing

One thing I keep coming back to is how much of the old model was still centered on humans. Even machine-to-machine communication was something we designed, for systems we understood.

AI shifts that a bit.

It can interpret, adapt, and generate without everything being explicitly specified upfront. Not perfectly, but enough to change how you approach problems.

I’ve noticed this in small ways:

  • I spend more time describing what I want, less time spelling out every step
  • When debugging, I start with symptoms instead of tracing everything manually
  • When designing interfaces, I think about how understandable they are, not just how correct

None of that is strictly about AI. It’s mostly about clarity.

But using AI makes the gaps in clarity harder to ignore. If something is vague, inconsistent, or overly dependent on context, it shows up quickly.

Closing Thought

Communication is still the core of all of this. That part hasn’t changed.

What’s different is the set of participants and how they relate to each other. The simple model I had before doesn’t quite hold anymore, and the newer one is a bit messier.

But it also feels more flexible. And honestly, a more interesting space to work in.